When AI Gets It Wrong: The Google Search Mess

Google's new AI-powered search feature was supposed to make finding information easier. Instead, it told some users to put glue on their pizza. This incident, along with others, highlights a growing problem with artificial intelligence: it can get things spectacularly wrong.
The feature, called "AI Overviews," was rolled out to US users in May 2024. It was designed to summarize search results, saving people from the effort of scrolling through endless links. But almost immediately, users began sharing bizarre and wildly inaccurate answers online. One user asking how to get cheese to stick to pizza was advised to use "non-toxic glue." Another search suggested that geologists recommend eating one small rock per day.
These errors, which apparently stemmed from satirical websites and Reddit comments, quickly went viral. While Google described them as "isolated examples" from "uncommon queries," the blunders raise serious questions about the reliability of AI as it becomes more integrated into our daily lives. As we rely more on AI for quick answers, what happens when we can no longer trust the information it provides?
This post will explore the recent issues with Google's AI search, the broader problem of AI "hallucinations," and what it all means for the future of information.

What is Google's AI Overview?

Google's AI Overviews is a feature that uses generative AI to create a concise summary of information at the top of the search results page. The goal is to give you a direct, comprehensive answer to your query without requiring you to click through multiple websites.
For example, if you search for "best hiking trails near me," the AI Overview might present a list of top-rated trails with brief descriptions, difficulty levels, and links to learn more. It’s meant to "take the legwork out of searching," consolidating information from various web pages into one easy-to-read block.
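Under the hood, answer features like this generally follow a "retrieve, then summarize" pattern: pull relevant snippets from the search index, then have a generative model condense them into a single block of text. The sketch below is a hypothetical illustration of that pattern only; the function names, domains, and data are invented, and this is not Google's actual implementation.

```python
# Hypothetical "retrieve, then summarize" pipeline. All names and data are
# invented for illustration; this is not Google's implementation.

def retrieve_snippets(query):
    """Stand-in for a search-index lookup; returns page snippets with sources."""
    return [
        {"source": "trail-site-a.example", "text": "Eagle Ridge: 5 mi loop, moderate."},
        {"source": "trail-site-b.example", "text": "Lakeside Path: 2 mi, easy, paved."},
    ]

def build_prompt(query, snippets):
    """Packs the query and the retrieved snippets into one summarization prompt."""
    context = "\n".join(f"- ({s['source']}) {s['text']}" for s in snippets)
    return (
        f"Summarize an answer to: {query}\n"
        f"Use only these sources:\n{context}\n"
        "Cite every source you rely on."
    )

def generate_summary(prompt):
    """Placeholder for the generative-model call that writes the overview text."""
    return ("Top nearby trails: Eagle Ridge (5 mi loop, moderate) and "
            "Lakeside Path (2 mi, easy, paved).")

query = "best hiking trails near me"
print(generate_summary(build_prompt(query, retrieve_snippets(query))))
```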
This feature represents a major shift in how search engines operate. For decades, Google has acted as a gateway to the web, pointing users toward other websites. With AI Overviews, Google is transforming into an answer engine, aiming to provide the information itself. This change is fundamental to its strategy to protect and future-proof its dominance in search, where it currently holds over 90% of the global market. (Global search engine desktop market share 2025, 2025)

When Good AI Goes Bad

The launch of AI Overviews in the US was quickly overshadowed by reports of its strange and often dangerous advice. The "glue on pizza" and "eat a rock a day" suggestions were just the beginning.
Other notable errors included:
  • A recipe for a "spicy spaghetti dish" that called for gasoline.
  • A claim that former US President James Madison graduated from the same university 21 times.
  • An answer stating a dog has played in the NBA, NFL, and NHL.
These AI-generated responses, known as "hallucinations," happen when the model presents false information as fact. In these cases, the AI appeared to pull information from unreliable sources, such as the satirical news site The Onion or Reddit forums, without verifying its accuracy or context.
Google stated that the "vast majority of AI overviews provide high-quality information" and that it was using these examples to refine its systems. However, this isn't the first time the company has faced problems with its AI products. In February 2024, it paused Gemini's ability to generate images of people after the chatbot was criticized for producing historically inaccurate images. (Raghavan, 2024) Gemini's predecessor, Bard, also had a notoriously flawed launch. (Googlers say Bard AI is “worse than useless,” ethics concerns were ignored, 2023)

Why Does AI Hallucinate?

AI hallucinations are not exclusive to Google. All large language models (LLMs) are prone to making things up. (Leiser et al., 2024) Understanding why this happens requires a basic grasp of how they function.
LLMs are trained on massive datasets of text and code from the internet. Through this training, they learn to recognize patterns, relationships, and structures in language. When you give an LLM a prompt, it doesn't "think" or "understand" in a human sense. Instead, it predicts the most statistically probable sequence of words to form a coherent response based on the patterns it has learned.
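To make that concrete, here is a toy sketch of the core step, next-token prediction, over a four-word vocabulary. The scores are made-up assumptions chosen purely for illustration; real models do this over tens of thousands of tokens using billions of learned weights.

```python
import math

# Invented vocabulary and raw scores (logits) a model might assign after the
# prompt "To make cheese stick to pizza, add more ..."
vocabulary = ["cheese", "sauce", "glue", "rock"]
logits = [2.1, 1.8, 0.3, -1.5]

# Softmax turns the raw scores into a probability distribution over the vocabulary.
exp_scores = [math.exp(score) for score in logits]
total = sum(exp_scores)
probabilities = [score / total for score in exp_scores]

# The model emits the most probable token, then repeats the whole process for
# the next word. Nothing in this step checks whether the continuation is true.
for word, p in sorted(zip(vocabulary, probabilities), key=lambda pair: -pair[1]):
    print(f"{word}: {p:.2f}")
```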
This process can go wrong in a few key ways:

Training on Flawed Data

The internet is filled with misinformation, satire, and user-generated content of varying quality. If an AI is trained on a Reddit comment joking about putting glue on pizza, it might not have the ability to distinguish that from a genuine cooking tip. The model absorbs the data it's given, good or bad.
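One common line of defense is to curate the training data itself, for example by filtering out sources known for satire or low-quality content. The snippet below is a deliberately simplified sketch of that idea with invented domains and examples; real pipelines use far richer quality signals, and even then jokes and satire slip through.

```python
# Invented low-trust domains and training examples, purely for illustration.
SATIRE_OR_LOW_TRUST = {"satire-news.example", "forum.example"}

training_examples = [
    {"source": "cooking-site.example", "text": "Shred mozzarella finely so it melts evenly."},
    {"source": "forum.example", "text": "Just mix some non-toxic glue into the sauce."},
]

def keep_example(example):
    """Drop any example whose source is on the low-trust list."""
    return example["source"] not in SATIRE_OR_LOW_TRUST

filtered = [ex for ex in training_examples if keep_example(ex)]
for ex in filtered:
    print(ex["source"], "->", ex["text"])
```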

Lack of True Comprehension

An AI doesn't understand that glue is an adhesive and not a food item. It only knows that the words "glue," "cheese," and "pizza" appeared together in a context that seemed to answer a similar question. It's a sophisticated form of pattern matching, not genuine reasoning or common sense.

The "Confidence" Problem

AI models are designed to be helpful and provide answers, even when they don't have enough reliable information. They will confidently string together a plausible-sounding sentence that is completely fabricated, because producing a fluent answer, not verifying the truth, is what they are optimized to do.
While developers are constantly working on ways to reduce hallucinations, such as improving training data and implementing better fact-checking mechanisms, it remains a fundamental challenge for the technology.
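One such mitigation is often called "grounding": only showing a generated claim if it can be matched back to a trusted source. The sketch below illustrates the idea with a deliberately naive check that every content word of a claim must appear in a trusted snippet; the snippets are invented, and production systems use far more sophisticated verification than this.

```python
# Invented trusted snippets; a real system would pull these from vetted sources.
TRUSTED_SNIPPETS = [
    "Cheese sticks to pizza better when the sauce is not too watery.",
    "Let pizza dough rest before adding toppings.",
]

STOPWORDS = {"the", "a", "to", "so", "is", "not", "too", "when", "and", "add", "into"}

def content_words(text):
    """Lower-case the text, strip punctuation, and drop common stopwords."""
    return {w.strip(".,").lower() for w in text.split()} - STOPWORDS

def is_grounded(claim, snippets=TRUSTED_SNIPPETS):
    """True only if every content word of the claim appears in one trusted snippet."""
    claim_words = content_words(claim)
    return any(claim_words <= content_words(snippet) for snippet in snippets)

candidate_answers = [
    "Cheese sticks better when the sauce is not too watery.",
    "Add non-toxic glue to the sauce so the cheese sticks.",
]

for answer in candidate_answers:
    status = "show" if is_grounded(answer) else "suppress (unsupported)"
    print(f"{status}: {answer}")
```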

Your Guide to a Future with AI

The viral blunders from Google are a stark reminder that while AI is a powerful tool, it is not infallible. As AI becomes more integrated into search engines, work software, and everyday apps, the ability to critically evaluate the information it provides is more important than ever.
The "glue pizza" incident may be humorous, but it underscores a serious issue. If we start to unquestioningly trust AI-generated answers for medical advice, financial guidance, or complex research, the consequences could be severe.
For businesses and professionals, these events highlight the need for a "human-in-the-loop" approach. AI can augment our work, automate repetitive tasks, and generate first drafts, but human oversight, critical thinking, and final approval remain essential. The future isn't about replacing human intelligence with artificial intelligence, but about using AI to enhance our own capabilities. The companies that succeed will be those that strike the right balance between automation and human expertise.
